17 research outputs found

    We have to go back: A Historic IP Attribution Service for Network Measurement

    Get PDF
    Researchers and practitioners often face the issue of having to attribute an IP address to an organization. For current data this is comparably easy, using services like whois or other databases. Similarly, for historic data, several entities, such as the RIPE NCC, provide websites that offer access to historic records. For large-scale network measurement work, though, researchers often have to attribute millions of addresses. For current data, Team Cymru provides a bulk whois service which allows bulk address attribution. However, at the time of writing, there is no service available that allows historic bulk attribution of IP addresses. Hence, in this paper, we introduce and evaluate our `Back-to-the-Future whois' service, allowing historic bulk attribution of IP addresses on a daily granularity based on CAIDA Routeviews aggregates. We provide this service to the community for free, and also share our implementation so researchers can run instances themselves.
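    A minimal sketch of the bulk-query pattern the abstract refers to, shown against the Team Cymru whois service for current data; the interface of the Back-to-the-Future whois service itself is not described in this abstract, so the hostname and query format below apply only to Team Cymru's existing service.

    # Sketch: bulk IP-to-AS attribution for current data via Team Cymru's
    # whois service (whois.cymru.com, TCP port 43). The paper's historic
    # service is not specified here; this only illustrates bulk attribution.
    import socket

    def cymru_bulk_whois(ips):
        """Send a bulk query to whois.cymru.com and return raw result lines."""
        query = "begin\nverbose\n" + "\n".join(ips) + "\nend\n"
        with socket.create_connection(("whois.cymru.com", 43), timeout=30) as sock:
            sock.sendall(query.encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", "replace").splitlines()

    if __name__ == "__main__":
        for line in cymru_bulk_whois(["8.8.8.8", "1.1.1.1"]):
            print(line)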

    Exploring EDNS-client-subnet adopters in your free time

    Full text link

    SoK: An Analysis of Protocol Design: Avoiding Traps for Implementation and Deployment

    No full text
    Today's Internet utilizes a multitude of different protocols. While some of these protocols were first implemented and used and later documented, others were first specified and then implemented. Regardless of how protocols came to be, their definitions can contain traps that lead to insecure implementations or deployments. A classical example is insufficiently strict authentication requirements in a protocol specification. The resulting misconfigurations, i.e., not enabling strong authentication, are common root causes for Internet security incidents. Indeed, Internet protocols have commonly been designed without security in mind, which leads to a multitude of misconfiguration traps. While this is slowly changing, too strict security considerations can have a similarly bad effect. Due to complex implementations and insufficient documentation, security features may remain unused, leaving deployments vulnerable. In this paper, we provide a systematization of the security traps found in common Internet protocols. By separating protocols into four classes, we identify major factors that lead to common security traps. These insights, together with observations about end-user-centric usability and security by default, are then used to derive recommendations for improving existing and designing new protocols---without such security-sensitive traps for operators, implementors, and users.
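    A hypothetical illustration (not taken from the paper) of the security-by-default recommendation: Python's ssl module yields a certificate-verifying TLS configuration by default, and weakening it requires an explicit opt-out, which is the kind of visible, deliberate step that keeps deployments out of misconfiguration traps.

    # Sketch: secure-by-default TLS client configuration in Python.
    import socket
    import ssl

    def negotiated_tls_version(host: str, port: int = 443) -> str:
        """Return the TLS version negotiated after a fully verified handshake."""
        context = ssl.create_default_context()  # verifies certificates and hostnames by default
        # The insecure variant would require deliberately disabling checks, e.g.:
        #   context.check_hostname = False
        #   context.verify_mode = ssl.CERT_NONE
        # which is exactly the kind of misconfiguration the paper warns about.
        with socket.create_connection((host, port), timeout=10) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                return tls.version() or ""

    if __name__ == "__main__":
        print(negotiated_tls_version("example.org"))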

    Steering hyper-giants' traffic at scale

    Get PDF
    Large content providers, known as hyper-giants, are responsible for sending the majority of the content traffic to consumers. These hyper-giants operate highly distributed infrastructures to cope with the ever-increasing demand for online content. To achieve commercial-grade performance of Web applications, enhanced end-user experience, improved reliability, and scaled network capacity, hyper-giants are increasingly interconnecting with eyeball networks at multiple locations. This poses new challenges for both (1) the eyeball networks having to perform complex inbound traffic engineering, and (2) hyper-giants having to map end-user requests to appropriate servers. We report on our multi-year experience in designing, building, rolling out, and operating the first-ever large-scale system, the Flow Director, which enables automated cooperation between one of the largest eyeball networks and a leading hyper-giant. We use empirical data collected at the eyeball network to evaluate its impact over two years of operation. We find very high compliance of the hyper-giant to the Flow Director’s recommendations, resulting in (1) close to optimal user-server mapping, and (2) 15% reduction of the hyper-giant’s traffic overhead on the ISP’s long-haul links, i.e., benefits for both parties and end-users alike.
    EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNet
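    A toy sketch of the cooperation pattern the abstract describes, with entirely hypothetical names and data: the eyeball network exports per-prefix ingress recommendations, and the hyper-giant consults them when mapping an end-user request to a serving site. The Flow Director's actual interfaces are not given in this abstract.

    # Sketch: pick a serving site per client using ISP-provided, per-prefix
    # recommendations (all prefixes and site names below are made up).
    import ipaddress

    RECOMMENDATIONS = {
        ipaddress.ip_network("198.51.100.0/24"): "site-fra",
        ipaddress.ip_network("203.0.113.0/24"): "site-ber",
    }
    DEFAULT_SITE = "site-ams"  # fallback when no recommendation exists

    def pick_site(client_ip: str) -> str:
        """Return the recommended serving site, preferring the longest matching prefix."""
        addr = ipaddress.ip_address(client_ip)
        matches = [(net, site) for net, site in RECOMMENDATIONS.items() if addr in net]
        if not matches:
            return DEFAULT_SITE
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    if __name__ == "__main__":
        print(pick_site("203.0.113.7"))  # -> site-ber
        print(pick_site("192.0.2.1"))    # -> site-ams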